17. Object Recognition

Your RViz output will look something like this. Need to keep working on my classifier!

If everything went well, you now have a trained classifier and you're ready to do object recognition! First, you have to build out your node for segmenting your point cloud. This is where you'll bring in your code from Exercise 1 and Exercise 2.

Make yourself a copy of the template.py file in the sensor_stick/scripts/ directory and call it something like object_recognition.py. Inside this file, you'll find all the TODOs from Exercises 1 and 2, and you can simply copy and paste your code in there from the previous exercises.

In terms of new code, you'll need to add the following bits to your pcl_callback() function where the Exercise-3 TODOs are:

First, create some empty lists to receive labels and object point clouds:

    # Classify the clusters!
    detected_objects_labels = []
    detected_objects = []

Next, write a for loop to cycle through each of the segmented clusters. Alternatively, you could just build on the loop you created for coloring the clusters in Exercise-2. In either case:

    for index, pts_list in enumerate(cluster_indices):
        # Grab the points for the cluster from the extracted outliers (cloud_objects)
        pcl_cluster = cloud_objects.extract(pts_list)
        # TODO: convert the cluster from pcl to ROS using helper function

        # Extract histogram features
        # TODO: complete this step just as is covered in capture_features.py

        # Make the prediction, retrieve the label for the result
        # and add it to detected_objects_labels list
        prediction = clf.predict(scaler.transform(feature.reshape(1,-1)))
        label = encoder.inverse_transform(prediction)[0]
        detected_objects_labels.append(label)

        # Publish a label into RViz
        label_pos = list(white_cloud[pts_list[0]])
        label_pos[2] += .4
        object_markers_pub.publish(make_label(label,label_pos, index))

        # Add the detected object to the list of detected objects.
        do = DetectedObject()
        do.label = label
        do.cloud = ros_cluster
        detected_objects.append(do)

    rospy.loginfo('Detected {} objects: {}'.format(len(detected_objects_labels), detected_objects_labels))

    # Publish the list of detected objects
    # This is the output you'll need to complete the upcoming project!
    detected_objects_pub.publish(detected_objects)
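To make the feature-extraction TODO above more concrete: the histogram features are just concatenated, normalized per-channel bin counts. Here is a minimal, stdlib-only sketch of that idea — the bin count, HSV layout, and helper names here are illustrative assumptions; in the exercise itself you'd call compute_color_histograms() and compute_normal_histograms() exactly as in capture_features.py:

```python
import random

def channel_histogram(values, nbins=32, value_range=(0.0, 1.0)):
    """Bin one channel's values into nbins equal-width bins (pure Python)."""
    lo, hi = value_range
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for v in values:
        # Clamp the top edge so v == hi falls into the last bin
        idx = min(int((v - lo) / width), nbins - 1)
        counts[idx] += 1
    return counts

def make_feature(points, nbins=32):
    """Concatenate per-channel histograms and normalize to unit sum,
    mirroring the shape of the feature vector built in capture_features.py."""
    channels = list(zip(*points))  # e.g. H, S, V columns
    hist = []
    for ch in channels:
        hist.extend(channel_histogram(ch, nbins))
    total = float(sum(hist)) or 1.0
    return [h / total for h in hist]

# Hypothetical cluster of 100 HSV points (stand-in for a pcl cluster)
cloud = [(random.random(), random.random(), random.random()) for _ in range(100)]
feature = make_feature(cloud)  # 3 channels x 32 bins = 96 values summing to 1.0
```

The key property is that the feature is a fixed-length vector regardless of how many points are in the cluster, which is what lets a single classifier handle clusters of any size.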

Below that, in the section that begins with if __name__ == '__main__':, add the following code to create some new publishers and read in your trained model:

    # Create Publishers
    # TODO: here you need to create two publishers
    # Call them object_markers_pub and detected_objects_pub
    # Have them publish to "/object_markers" and "/detected_objects" with
    # Message Types "Marker" and "DetectedObjectsArray", respectively

    # Load Model From disk
    model = pickle.load(open('model.sav', 'rb'))
    clf = model['classifier']
    encoder = LabelEncoder()
    encoder.classes_ = model['classes']
    scaler = model['scaler']
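The model.sav file is nothing exotic: it's a pickled dictionary with 'classifier', 'classes', and 'scaler' entries, which is why the loading code above can pull each piece out by key. Here is a stdlib-only sketch of that round trip, using placeholder objects in place of the real sklearn artifacts (the object names and stand-in values are illustrative assumptions):

```python
import os
import pickle
import tempfile

# Stand-ins for the trained artifacts: a real model.sav holds an
# sklearn classifier, its label classes, and a fitted scaler.
model = {
    'classifier': {'kind': 'svm-placeholder'},
    'classes': ['beer', 'bowl', 'hammer'],       # example object names
    'scaler': {'mean': [0.0], 'scale': [1.0]},
}

path = os.path.join(tempfile.mkdtemp(), 'model.sav')
with open(path, 'wb') as f:
    pickle.dump(model, f)

# This mirrors the loading code in object_recognition.py
with open(path, 'rb') as f:
    loaded = pickle.load(f)

clf = loaded['classifier']
classes = loaded['classes']
scaler = loaded['scaler']
```

Note that pickle.load() resolves paths relative to the current working directory, which is why model.sav must sit in the directory you launch the node from.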

You should now be all set to do object recognition! At this point, if you haven't already, you can shut down your training environment and bring up the tabletop scene again:

$ roslaunch sensor_stick robot_spawn.launch

In another terminal, run your object recognition node (from the scripts/ directory if that's where you're working, but keep in mind your model.sav file needs to be in the same directory where you run this!):

$ chmod +x object_recognition.py
$ ./object_recognition.py

And you should now see markers in RViz associated with each segmented object!

My classifier needs more help!

My classifier needs more help!